Unintended harm


AI is already causing unintended harm. What happens when it falls into the wrong hands? David Evan Harris

The Guardian

A researcher was granted access earlier this year by Facebook's parent company, Meta, to incredibly potent artificial intelligence software – and leaked it to the world. As a former researcher on Meta's civic integrity and responsible AI teams, I am terrified by what could happen next. Though Meta was violated by the leak, it came out as the winner: researchers and independent coders are now racing to improve on or build on the back of LLaMA (Large Language Model Meta AI – Meta's branded version of a large language model or LLM, the type of software underlying ChatGPT), with many sharing their work openly with the world. This could position Meta as owner of the centrepiece of the dominant AI platform, much in the same way that Google controls the open-source Android operating system that is built on and adapted by device manufacturers globally. If Meta were to secure this central position in the AI ecosystem, it would have leverage to shape the direction of AI at a fundamental level, controlling both the experiences of individual users and the limits on what other companies could and couldn't do.


AI Impact Statements - Empathy, Imperfection, and Responsibility

#artificialintelligence

If you follow the media stories about AI, you will see two schools of thought. One school is utopian, proclaiming the amazing power of AI, from predicting quantum electron paths to driving a race car like a champion. The other school is dystopian, scaring us with crisis-ridden stories that range from how AI could bring about the end of privacy to self-driving cars that almost immediately crash. One school of thought is outraged by imperfection, while the other lives in denial. But neither extreme view accurately represents our imperfect world. As Stephen Hawking said, "One of the basic rules of the universe is that nothing is perfect."


What is AI bias mitigation, and how can it improve AI fairness?

#artificialintelligence

Algorithmic bias is one of the most heavily scrutinized areas of the AI industry. Unintended systemic errors risk producing unfair or arbitrary outcomes, elevating the need for standardized ethical and responsible technology, especially as the AI market is expected to hit $110 billion by 2024. There are multiple ways AI can become biased and create harmful outcomes. The first is the business process itself that the AI is being designed to augment or replace. If that process, its context, and the people it is applied to are biased against certain groups, regardless of intent, then the resulting AI application will be biased as well.
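
The snippet above describes bias entering through the process an AI system automates. As a loose illustration of how such bias might be audited in practice, the sketch below computes per-group selection rates and a disparate-impact ratio; the column names, toy data, and the four-fifths threshold are assumptions made here for illustration and are not drawn from the article.

```python
# A minimal, illustrative bias audit: compare positive-outcome rates across groups.
# The column names ("group", "approved") and the 80% threshold are assumptions
# for illustration, not a reference to any specific system mentioned above.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group, e.g. a loan approval rate."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate (1.0 means parity)."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Toy data standing in for the outcome of an automated business process.
    df = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })
    rates = selection_rates(df, "group", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # common "four-fifths rule" heuristic
        print("Potential adverse impact: investigate the upstream process and data.")
```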


A group of tech executives warn military about unintended harm caused by AI in combat

Daily Mail - Science & tech

This week, the Defense Innovation Board issued a series of recommendations to the Department of Defense on how artificial intelligence should be implemented in future military conflict. The Defense Innovation Board was first created in 2016 to establish a series of best practices for potential collaborations between the US military and Silicon Valley. There are sixteen current board members from a broad range of disciplines, including former Google CEO Eric Schmidt, Facebook executive Marne Levine, Microsoft's Chief Digital Officer Kurt Delbene, astrophysicist Neil deGrasse Tyson, Steve Jobs biographer Walter Isaacson, and LinkedIn co-founder Reid Hoffman. 'Now is the time, at this early stage of the resurgence of interest in AI, to hold serious discussions about norms of AI development and use in a military context, long before there has been an incident,' the report says. The report says that using AI for military actions or decision-making comes with 'the duty to take feasible precautions to reduce the risk of harm to the civilian population and other protected persons and objects.'


You better explain yourself, mister: DARPA's mission to make an accountable AI

#artificialintelligence

The US government's mighty DARPA last year kicked off a research project designed to make systems controlled by artificial intelligence more accountable to their human users. The Defense Advanced Research Projects Agency, to give this $2.97bn agency its full name, is the Department of Defense's body responsible for emerging technology for use by the US armed forces. Significantly, it was DARPA's early funding of the packet-switching Advanced Research Projects Agency Network (ARPANET) more than 40 years ago that helped bring about the internet. Coming bang up to date, the issue at the heart of the Explainable Artificial Intelligence (XAI) programme is that AI is extending into many areas of everyday life, yet the internal workings of such systems are often opaque and could be concealing flaws in their decision-making processes. The field of AI has made great strides in recent years, thanks to developments in machine learning algorithms and deep learning systems based on artificial neural networks (ANNs).
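
To make the opacity problem concrete, here is a minimal sketch of one widely used, model-agnostic explanation technique: permutation feature importance. It is offered only as an example of the kind of post-hoc explanation that XAI research pursues; it is not DARPA's method, and the dataset and model choices are illustrative assumptions.

```python
# Permutation feature importance: shuffle each input feature and measure how
# much test accuracy drops. A large drop suggests the model leans heavily on
# that feature for its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model appears to rely on most.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```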


Google's five challenges facing artificial intelligence

#artificialintelligence

Artificial intelligence is either the bright shining future of technology or an insidious threat that could endanger all of mankind, depending on your point of view. Now Google, one of the companies leading the development of AI systems, has set out five key challenges that need to be overcome with the technology - but they are somewhat more mundane than robots rising up to take over the world. Instead, the company sees one of the key problems as being how to stop negative side effects, such as a cleaning robot that knocks over a precious vase to get its job done faster. Google has published a new research paper highlighting five challenges it sees as needing to be overcome to prevent AI and robots causing unintended harm. It also says robots need to be programmed so they do not 'game the system' – such as simply covering mess in a room with a sheet it cannot see through rather than tidying up. Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
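
One common proposal for the 'negative side effects' problem is to subtract an impact penalty from the task reward. The toy sketch below illustrates that idea for the cleaning-robot example; the state encoding, penalty weight, and numbers are invented for illustration and are not taken from Google's paper.

```python
# A toy reward-shaping sketch: task reward minus a weighted penalty for
# unintended, task-irrelevant changes to the environment (broken vases).
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    mess_cleaned: int   # task progress
    vases_broken: int   # unintended environment change

def task_reward(prev: State, new: State) -> float:
    return float(new.mess_cleaned - prev.mess_cleaned)

def side_effect_penalty(prev: State, new: State) -> float:
    # Count irreversible changes that are unrelated to the cleaning task.
    return float(new.vases_broken - prev.vases_broken)

def shaped_reward(prev: State, new: State, beta: float = 5.0) -> float:
    """Task reward minus a weighted impact penalty."""
    return task_reward(prev, new) - beta * side_effect_penalty(prev, new)

# The fast-but-destructive route now scores worse than the careful one.
prev = State(mess_cleaned=0, vases_broken=0)
careful = State(mess_cleaned=1, vases_broken=0)
reckless = State(mess_cleaned=2, vases_broken=1)
print(shaped_reward(prev, careful))   # prints  1.0
print(shaped_reward(prev, reckless))  # prints -3.0
```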


Forget killer robots! Droids could BORE you to death: Google outlines the five challenges facing artificial intelligence

Daily Mail - Science & tech

Artificial intelligence is either the bright shining future of technology or an insidious threat that could endanger all of mankind, depending on your point of view. Now Google, one of the companies leading the development of AI systems, has set out five key challenges that need to be overcome with the technology - but they are somewhat more mundane than robots rising up to take over the world. Instead, the company sees one of the key problems as being how to stop negative side effects, such as a cleaning robot that knocks over a precious vase to get its job done faster. Google has published a new research paper highlighting five challenges it sees as needing to be overcome to prevent AI and robots causing unintended harm. It also says robots need to be programmed so they do not 'game the system' – such as simply covering mess in a room with a sheet it cannot see through rather than tidying up. Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?

